
Deep Roots: Improving CNN Efficiency with Hierarchical Filter Groups



Abstract

We propose a new method for creating computationally efficient and compact convolutional neural networks (CNNs) using a novel sparse connection structure that resembles a tree root. This allows a significant reduction in computational cost and number of parameters compared to state-of-the-art deep CNNs, without compromising accuracy, by exploiting the sparsity of inter-layer filter dependencies. We validate our approach by using it to train more efficient variants of state-of-the-art CNN architectures, evaluated on the CIFAR10 and ILSVRC datasets. Our results show similar or higher accuracy than the baseline architectures with much less computation, as measured by CPU and GPU timings. For example, for ResNet 50, our model has 40% fewer parameters, 45% fewer floating point operations, and is 31% (12%) faster on a CPU (GPU). For the deeper ResNet 200 our model has 25% fewer floating point operations and 44% fewer parameters, while maintaining state-of-the-art accuracy. For GoogLeNet, our model has 7% fewer parameters and is 21% (16%) faster on a CPU (GPU).
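The savings described above come from filter groups: each filter connects to only a fraction of the previous layer's channels, so the weight tensor shrinks proportionally. A minimal sketch of this parameter count (illustrative only, not the authors' implementation; the function name and example sizes are assumptions):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k 2D convolution whose input channels are
    split into `groups` filter groups (bias terms ignored). Each of the
    c_out filters sees only c_in // groups input channels."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * k * k * c_out

# A standard 3x3 convolution from 256 to 256 channels...
full = conv_params(256, 256, 3)              # 589,824 weights
# ...versus the same layer split into 4 filter groups:
rooted = conv_params(256, 256, 3, groups=4)  # 147,456 weights

print(full, rooted, full / rooted)  # groups=4 gives a 4x reduction
```

With g groups the per-layer weight count falls by a factor of g; the paper's root modules vary the group count across depth so that overall accuracy is preserved.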
